Version: 4.5

Deploying Models with Ensemble Techniques

You can also deploy a set of models with Katonic by returning a list of models from the loadmodel function and supplying the model files along with the other required files. This feature is supported across all model types, with the constraint that models are assumed to be evaluated in the order in which they are listed. If you need to transform data between models, use the pre-processing methods.

For example, suppose you have a Random Forest model that takes in a 1D vector and outputs a single class prediction, an XGBoost model that takes in a 1D vector and outputs a single class prediction, and a CatBoost model with the same configuration. You can deploy them as parallel models by supplying the following to the function and combining their outputs into a single prediction result:

import pickle
from collections import Counter

def loadmodel(logger):
    """Get the models from cloud object storage."""
    logger.info("loading models")
    #### if you are defining any specific files in the code, pass the file name directly
    TRAINED_MODEL_FILEPATH1 = "random_forest.pickle"
    TRAINED_MODEL_FILEPATH2 = "xgb.pickle"
    TRAINED_MODEL_FILEPATH3 = "catb.pickle"
    with open(TRAINED_MODEL_FILEPATH1, 'rb') as f:
        model1 = pickle.load(f)
    with open(TRAINED_MODEL_FILEPATH2, 'rb') as f:
        model2 = pickle.load(f)
    with open(TRAINED_MODEL_FILEPATH3, 'rb') as f:
        model3 = pickle.load(f)

    return [model1, model2, model3]


def preprocessing(df, logger):
    """Applies preprocessing techniques to the raw data."""
    ## define the required preprocessing steps and return the transformed data
    return df
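Because all three models here consume the same raw feature vector, preprocessing is a single shared step applied before prediction. A minimal sketch of what this function might contain, assuming the input is a pandas DataFrame with a hypothetical numeric column named "feature_1" (both the column name and the steps are illustrative, not part of the Katonic API):

```python
import logging

import pandas as pd

def preprocessing(df, logger):
    """Applies preprocessing techniques to the raw data."""
    logger.info("preprocessing data")
    # Hypothetical steps: fill missing values, then standardise one column.
    # The column name "feature_1" is an assumption for illustration.
    df = df.fillna(0)
    df["feature_1"] = (df["feature_1"] - df["feature_1"].mean()) / df["feature_1"].std()
    return df
```

Whatever steps you choose, the function should return the transformed data so it can be fed to every model in the list.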

def predict(features, model, logger):
    """Predicts the results for the given inputs."""
    try:
        logger.info("model prediction")
        model1, model2, model3 = model
        # Each predict call returns an array; take the scalar class label
        # so the labels are hashable and can be counted for voting
        prediction1 = model1.predict(features)[0]
        prediction2 = model2.predict(features)[0]
        prediction3 = model3.predict(features)[0]
        # Majority vote: most_common(1) returns [(label, count)]
        c = Counter([prediction1, prediction2, prediction3])
        prediction_result = c.most_common(1)[0][0]
    except Exception as e:
        logger.error(e)
        return e
    return prediction_result
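The majority-vote step above can be sketched in isolation. This example uses three hard-coded class labels standing in for the outputs of the three models (the values are illustrative only), and shows why Counter.most_common needs the extra indexing: it returns a list of (label, count) pairs, not the label itself.

```python
from collections import Counter

# Stand-ins for the scalar predictions of the three models
predictions = [1, 0, 1]

# most_common(1) returns [(label, count)]; unpack the first pair
majority_label, votes = Counter(predictions).most_common(1)[0]
print(majority_label, votes)  # → 1 2
```

If all three models disagree, Counter breaks the tie by insertion order, so the first model's prediction wins; use weighted voting or predicted probabilities if you need a different tie-breaking rule.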